The Grey Wolf Optimizer (GWO) is a swarm-based meta-heuristic proposed in 2014.〔S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey Wolf Optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.〕 The algorithm mimics the social leadership and hunting behaviour of grey wolves in nature: the main phases of the hunt in a wolf pack are modelled mathematically to solve optimization problems.

== Algorithm description ==
In this algorithm the population is divided into four groups: alpha (α), beta (β), delta (δ), and omega (ω). The three fittest wolves are considered as α, β, and δ; they guide the other wolves (ω) toward promising areas of the search space. During optimization, the wolves update their positions around the prey as follows:

:D = |C · X_p(t) − X(t)|
:X(t + 1) = X_p(t) − A · D

where t indicates the current iteration, A and C are coefficient vectors (A = 2a · r_1 − a and C = 2 · r_2, with r_1 and r_2 random vectors in [0, 1] and the components of a decreased linearly from 2 to 0 over the course of the iterations), X_p is the position vector of the prey, and X indicates the position vector of a grey wolf. With these equations, a wolf at position (X, Y) is able to relocate itself around the prey, and the random parameters A and C allow the wolves to reach any position in the continuous space around it.

In the GWO algorithm, α, β, and δ are assumed to be close to the position of the prey (the optimum). During optimization, the three best solutions obtained so far are taken as α, β, and δ respectively; the remaining wolves are considered as ω and re-position themselves with respect to α, β, and δ. The distances to the three leaders are defined as:

:D_α = |C_1 · X_α − X|,  D_β = |C_2 · X_β − X|,  D_δ = |C_3 · X_δ − X|

where X_α, X_β, and X_δ show the positions of the alpha, beta, and delta, X indicates the position of the current solution, and C_1, C_2, C_3 are random vectors. After defining these distances, the final position of the current solution is calculated as follows:

:X_1 = X_α − A_1 · D_α,  X_2 = X_β − A_2 · D_β,  X_3 = X_δ − A_3 · D_δ
:X(t + 1) = (X_1 + X_2 + X_3) / 3

where A_1, A_2, A_3 are random vectors and t indicates the number of iterations. The first three of these equations define the step size of the ω wolf toward α, β, and δ.
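As a concrete illustration, the update of a single ω wolf around the three leaders can be sketched in Python/NumPy as follows. This is a minimal sketch under the equations above, not reference code from the paper; the function name `update_position` and its signature are mine.

```python
import numpy as np

def update_position(X, X_alpha, X_beta, X_delta, a, rng):
    """Sketch of one omega-wolf update (hypothetical helper, not from the paper).

    X, X_alpha, X_beta, X_delta : 1-D position vectors
    a : scalar decreased linearly from 2 to 0 over the iterations
    rng : a numpy random Generator
    """
    candidates = []
    for leader in (X_alpha, X_beta, X_delta):
        r1, r2 = rng.random(X.size), rng.random(X.size)
        A = 2 * a * r1 - a            # A = 2a·r1 − a
        C = 2 * r2                    # C = 2·r2
        D = np.abs(C * leader - X)    # distance to this leader
        candidates.append(leader - A * D)  # X1, X2, X3
    # X(t+1) = (X1 + X2 + X3) / 3
    return np.mean(candidates, axis=0)
```

Note that as a approaches 0, each candidate position collapses onto its leader, which is how the algorithm shifts from exploration to exploitation.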
These equations then define the final position of the ω wolves. Note that there are two coefficient vectors, A and C. These are random, adaptive vectors that provide exploration and exploitation for the GWO algorithm. Exploration occurs when a component of A is greater than 1 or less than −1; the vector C also promotes exploration when it is greater than 1. By contrast, exploitation is emphasized when |A| < 1 and C < 1 (see Fig. 2). Note that a is decreased linearly during optimization, shrinking |A| so as to emphasize exploitation as the iterations increase. However, C is generated randomly throughout optimization to emphasize exploration/exploitation at any stage, a very helpful mechanism for escaping local optima. In summary, the general steps of the GWO algorithm are as follows:

# Initialize a population of wolves randomly within the upper and lower bounds of the variables
# Calculate the corresponding objective value for each wolf
# Choose the three best wolves and save them as α, β, and δ
# Update the positions of the rest of the population (ω wolves)
# Update the parameters a, A, and C
# Go to step 2 if the end criterion is not satisfied
# Return the position of α as the best approximation of the optimum

Mirjalili et al. showed that the GWO algorithm provides very competitive results compared to other well-known meta-heuristics.〔S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey Wolf Optimizer," Advances in Engineering Software, vol. 69, pp. 46-61, 2014.〕 In particular, the exploration of this algorithm is high, which helps it avoid local optima, and its balance of exploration and exploitation is simple yet effective on challenging problems.
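The general steps above can be assembled into a compact NumPy sketch. This is an illustrative implementation under the description in this article, not the authors' code; the function name `gwo`, its parameters, and their defaults are my assumptions.

```python
import numpy as np

def gwo(objective, lb, ub, n_wolves=20, n_iter=100, seed=None):
    """Minimal GWO sketch: minimize `objective` over the box [lb, ub].
    Names and defaults are illustrative, not from the original paper."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    # Step 1: random initial population within the variable bounds
    wolves = lb + rng.random((n_wolves, dim)) * (ub - lb)
    for t in range(n_iter):
        # Steps 2-3: evaluate wolves, take the three best as alpha, beta, delta
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        X_alpha, X_beta, X_delta = wolves[order[:3]]  # fancy indexing copies
        # Step 5: a decreases linearly from 2 toward 0
        a = 2 - 2 * t / n_iter
        # Step 4: move every wolf with respect to alpha, beta, and delta
        for i in range(n_wolves):
            candidates = []
            for leader in (X_alpha, X_beta, X_delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a       # A = 2a·r1 − a
                C = 2 * r2               # C = 2·r2
                D = np.abs(C * leader - wolves[i])
                candidates.append(leader - A * D)
            # X(t+1) = (X1 + X2 + X3) / 3, clipped back into the bounds
            wolves[i] = np.clip(np.mean(candidates, axis=0), lb, ub)
    # Step 7: return the best position found in the final population
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[np.argmin(fitness)]
```

For example, `gwo(lambda x: np.sum(x**2), np.full(2, -5.0), np.full(2, 5.0))` drives the pack toward the origin, the minimum of the sphere function.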